Results 1 - 20 of 58
1.
Sci Adv ; 10(7): eadk0010, 2024 Feb 16.
Article in English | MEDLINE | ID: mdl-38363839

ABSTRACT

Melody is a core component of music in which discrete pitches are serially arranged to convey emotion and meaning. Perception varies along several pitch-based dimensions: (i) the absolute pitch of notes, (ii) the difference in pitch between successive notes, and (iii) the statistical expectation of each note given prior context. How the brain represents these dimensions and whether their encoding is specialized for music remains unknown. We recorded high-density neurophysiological activity directly from the human auditory cortex while participants listened to Western musical phrases. Pitch, pitch-change, and expectation were selectively encoded at different cortical sites, indicating a spatial map for representing distinct melodic dimensions. The same participants listened to spoken English, and we compared responses to music and speech. Cortical sites selective for music encoded expectation, while sites that encoded pitch and pitch-change in music used the same neural code to represent equivalent properties of speech. Findings reveal how the perception of melody recruits both music-specific and general-purpose sound representations.


Subjects
Auditory Cortex, Music, Humans, Pitch Perception/physiology, Auditory Cortex/physiology, Brain/physiology, Language
3.
bioRxiv ; 2023 Oct 19.
Article in English | MEDLINE | ID: mdl-37905047

ABSTRACT

Melody is a core component of music in which discrete pitches are serially arranged to convey emotion and meaning. Perception of melody varies along several pitch-based dimensions: (1) the absolute pitch of notes, (2) the difference in pitch between successive notes, and (3) the higher-order statistical expectation of each note conditioned on its prior context. While humans readily perceive melody, how these dimensions are collectively represented in the brain and whether their encoding is specialized for music remains unknown. Here, we recorded high-density neurophysiological activity directly from the surface of human auditory cortex while Western participants listened to Western musical phrases. Pitch, pitch-change, and expectation were selectively encoded at different cortical sites, indicating a spatial code for representing distinct dimensions of melody. The same participants listened to spoken English, and we compared evoked responses to music and speech. Cortical sites selective for music were systematically driven by the encoding of expectation. In contrast, sites that encoded pitch and pitch-change used the same neural code to represent equivalent properties of speech. These findings reveal the multidimensional nature of melody encoding, consisting of both music-specific and domain-general sound representations in auditory cortex. Teaser: The human brain contains both general-purpose and music-specific neural populations for processing distinct attributes of melody.

4.
Nat Commun ; 14(1): 4309, 2023 07 18.
Article in English | MEDLINE | ID: mdl-37463907

ABSTRACT

Speech processing requires extracting meaning from acoustic patterns using a set of intermediate representations based on a dynamic segmentation of the speech stream. Using whole brain mapping obtained in fMRI, we investigate the locus of cortical phonemic processing not only for single phonemes but also for short combinations made of diphones and triphones. We find that phonemic processing areas are much larger than previously described: they include not only the classical areas in the dorsal superior temporal gyrus but also a larger region in the lateral temporal cortex where diphone features are best represented. These identified phonemic regions overlap with the lexical retrieval region, but we show that short word retrieval is not sufficient to explain the observed responses to diphones. Behavioral studies have shown that phonemic processing and lexical retrieval are intertwined. Here, we also have identified candidate regions within the speech cortical network where this joint processing occurs.


Subjects
Speech Perception, Speech, Humans, Speech/physiology, Temporal Lobe/diagnostic imaging, Temporal Lobe/physiology, Brain/physiology, Speech Perception/physiology, Brain Mapping, Magnetic Resonance Imaging, Cerebral Cortex/diagnostic imaging
5.
Curr Biol ; 33(10): R418-R420, 2023 05 22.
Article in English | MEDLINE | ID: mdl-37220737

ABSTRACT

Native speakers of tonal languages show enhanced musical melody perception but diminished rhythm abilities. This effect has now been rigorously demonstrated in a new study that tested the musical IQ of half a million human participants across the globe.


Subjects
Music, Singing, Voice, Humans, Language
6.
J Neurosci ; 43(14): 2579-2596, 2023 04 05.
Article in English | MEDLINE | ID: mdl-36859308

ABSTRACT

Many social animals can recognize other individuals by their vocalizations. This requires a memory system capable of mapping incoming acoustic signals to one of many known individuals. Using the zebra finch, a social songbird that uses songs and distance calls to communicate individual identity (Elie and Theunissen, 2018), we tested the role of two cortical-like brain regions in a vocal recognition task. We found that the rostral region of the Caudomedial Nidopallium (NCM), a secondary auditory region of the avian pallium, was necessary for maintaining auditory memories for conspecific vocalizations in both male and female birds, whereas HVC (used as a proper name), a premotor area that gates auditory input into the vocal motor and song learning pathways in male birds (Roberts and Mooney, 2013), was not. Both NCM and HVC have previously been implicated in processing the tutor song in the context of song learning (Sakata and Yazaki-Sugiyama, 2020). Our results suggest that NCM might not only store songs as templates for future vocal imitation but also songs and calls for perceptual discrimination of vocalizers in both male and female birds. NCM could therefore operate as a site for auditory memories for vocalizations used in various facets of communication. We also observed that new auditory memories could be acquired without intact HVC or NCM but that for these new memories NCM lesions caused deficits in either memory capacity or auditory discrimination. These results suggest that the high-capacity memory functions of the avian pallial auditory system depend on NCM.

SIGNIFICANCE STATEMENT: Many aspects of vocal communication require the formation of auditory memories. Voice recognition, for example, requires a memory for vocalizers to identify acoustical features. Previous work suggests that this memory formation is mediated by high-level sensory areas, not traditional memory areas such as the hippocampus. Using lesion experiments, we show that one secondary auditory brain region in songbirds that had previously been implicated in storing song memories for vocal imitation is also implicated in storing vocal memories for individual recognition. The role of the neural circuits in this region in interpreting the meaning of communication calls should be investigated in the future.


Subjects
Finches, Animal Vocalization, Animals, Male, Female, Acoustic Stimulation, Learning, Brain, Auditory Perception
7.
Curr Biol ; 32(10): R482-R493, 2022 05 23.
Article in English | MEDLINE | ID: mdl-35609550

ABSTRACT

The breadth and complexity of natural behaviors inspires awe. Understanding how our perceptions, actions, and internal thoughts arise from evolved circuits in the brain has motivated neuroscientists for generations. Researchers have traditionally approached this question by focusing on stereotyped behaviors, either natural or trained, in a limited number of model species. This approach has allowed for the isolation and systematic study of specific brain operations, which has greatly advanced our understanding of the circuits involved. At the same time, the emphasis on experimental reductionism has left most aspects of the natural behaviors that have shaped the evolution of the brain largely unexplored. However, emerging technologies and analytical tools make it possible to comprehensively link natural behaviors to neural activity across a broad range of ethological contexts and timescales, heralding new modes of neuroscience focused on natural behaviors. Here we describe a three-part roadmap that aims to leverage the wealth of behaviors in their naturally occurring distributions, linking their variance with that of underlying neural processes to understand how the brain is able to successfully navigate the everyday challenges of animals' social and ecological landscapes. To achieve this aim, experimenters must harness one challenge faced by all neurobiological systems, namely variability, in order to gain new insights into the language of the brain.


Subjects
Brain, Neurosciences, Animals, Language
8.
Nat Commun ; 11(1): 4970, 2020 10 02.
Article in English | MEDLINE | ID: mdl-33009414

ABSTRACT

Communicating species identity is a key component of many animal signals. However, whether selection for species recognition systematically increases signal diversity during clade radiation remains debated. Here we show that in woodpecker drumming, a rhythmic signal used during mating and territorial defense, the amount of species identity information encoded remained stable during woodpeckers' radiation. Acoustic analyses and evolutionary reconstructions show interchange among six main drumming types despite strong phylogenetic contingencies, suggesting evolutionary tinkering of drumming structure within a constrained acoustic space. Playback experiments and quantification of species discriminability demonstrate sufficient signal differentiation to support species recognition in local communities. Finally, we only find character displacement in the rare cases where sympatric species are also closely related. Overall, our results illustrate how historical contingencies and ecological interactions can promote conservatism in signals during a clade radiation without impairing the effectiveness of information transfer relevant to inter-specific discrimination.


Subjects
Animal Communication, Biological Evolution, Passeriformes/physiology, Acoustics, Animals, Ecosystem, Information Theory, Phylogeny, Species Specificity, Sympatry
9.
Nat Commun ; 11(1): 2914, 2020 06 04.
Article in English | MEDLINE | ID: mdl-32499545

ABSTRACT

An amendment to this paper has been published and can be accessed via a link at the top of the paper.

10.
Sci Rep ; 10(1): 3561, 2020 Feb 21.
Article in English | MEDLINE | ID: mdl-32081889

ABSTRACT

An amendment to this paper has been published and can be accessed via a link at the top of the paper.

11.
PLoS Comput Biol ; 15(9): e1006698, 2019 09.
Article in English | MEDLINE | ID: mdl-31557151

ABSTRACT

Although information theoretic approaches have been used extensively in the analysis of the neural code, they have yet to be used to describe how information is accumulated in time while sensory systems are categorizing dynamic sensory stimuli such as speech sounds or visual objects. Here, we present a novel method to estimate the cumulative information for stimuli or categories. We further define a time-varying categorical information index that, by comparing the information obtained for stimuli versus categories of these same stimuli, quantifies invariant neural representations. We use these methods to investigate the dynamic properties of avian cortical auditory neurons recorded in zebra finches that were listening to a large set of call stimuli sampled from the complete vocal repertoire of this species. We found that the time-varying rates carry 5 times more information than the mean firing rates even in the first 100 ms. We also found that cumulative information has slow time constants (100-600 ms) relative to the typical integration time of single neurons, reflecting the fact that the behaviorally informative features of auditory objects are time-varying sound patterns. When we correlated firing rates and information values, we found that average information correlates with average firing rate but that the higher rates found in the onset response yielded information values similar to the lower rates found in the sustained response: the onset and sustained responses of avian cortical auditory neurons provide similar levels of independent information about call identity and call-type. Finally, our information measures allowed us to rigorously define categorical neurons; these categorical neurons show a high degree of invariance for vocalizations within a call-type. Peak invariance is found around 150 ms after stimulus onset. Surprisingly, call-type invariant neurons were found in both primary and secondary avian auditory areas.
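The categorical information index described in this abstract can be illustrated with a toy sketch: compare a neuron's information about stimulus identity against its information about stimulus category. Everything below (the plug-in mutual-information estimator, the 4-stimulus/2-category setup, the noiseless responses) is an illustrative assumption, not the paper's actual estimator or data.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(3)

def mutual_info(x, y):
    """Plug-in estimate of mutual information (in bits) between two discrete sequences."""
    n = len(x)
    pxy = Counter(zip(x.tolist(), y.tolist()))
    px = Counter(x.tolist())
    py = Counter(y.tolist())
    return sum((c / n) * np.log2(c * n / (px[a] * py[b]))
               for (a, b), c in pxy.items())

# Hypothetical setup: 4 stimuli grouped into 2 categories.
stims = rng.integers(0, 4, size=5000)
cats = stims // 2

# A "categorical" neuron responds only to category; a "selective" neuron
# responds to stimulus identity.
resp_cat = cats
resp_sel = stims

# Categorical index: category information / stimulus information.
# A value near 1 indicates an invariant (categorical) representation.
idx_categorical = mutual_info(resp_cat, cats) / mutual_info(resp_cat, stims)
idx_selective = mutual_info(resp_sel, cats) / mutual_info(resp_sel, stims)
print(round(idx_categorical, 2), round(idx_selective, 2))
```

The categorical neuron's index is 1 by construction (its response carries no stimulus information beyond category), while the stimulus-selective neuron's index is about 0.5 here (1 bit of category information out of 2 bits of stimulus information).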


Subjects
Auditory Cortex, Neurological Models, Neurons/physiology, Animal Vocalization/physiology, Acoustic Stimulation, Animals, Auditory Cortex/cytology, Auditory Cortex/physiology, Computational Biology, Female, Finches/physiology, Male
12.
Nat Commun ; 9(1): 4026, 2018 10 02.
Article in English | MEDLINE | ID: mdl-30279497

ABSTRACT

Individual recognition is critical in social animal communication, but it has not been demonstrated for a complete vocal repertoire. Deciphering the nature of individual signatures across call types is necessary to understand how animals solve the problem of combining, in the same signal, information about identity and behavioral state. We show that distinct signatures differentiate zebra finch individuals for each call type. The distinctiveness of these signatures varies: contact calls bear strong individual signatures while calls used during aggressive encounters are less individualized. We propose that the costly solution of using multiple signatures evolved because of the limitations of the passive filtering properties of the birds' vocal organ for generating sufficiently individualized features. Thus, individual recognition requires the memorization of multiple signatures for the entire repertoire of conspecifics of interest. We show that zebra finches excel at these tasks.


Subjects
Auditory Perception, Psychological Discrimination, Finches, Recognition (Psychology), Animal Vocalization, Animals, Female, Male
13.
Sci Rep ; 8(1): 13826, 2018 09 14.
Article in English | MEDLINE | ID: mdl-30218053

ABSTRACT

Timbre, the unique quality of a sound that points to its source, allows us to quickly identify a loved one's voice in a crowd and distinguish a buzzy, bright trumpet from a warm cello. Despite its importance for perceiving the richness of auditory objects, timbre is a relatively poorly understood feature of sounds. Here we demonstrate for the first time that listeners adapt to the timbre of a wide variety of natural sounds. For each of several sound classes, participants were repeatedly exposed to two sounds (e.g., clarinet and oboe, male and female voice) that formed the endpoints of a morphed continuum. Adaptation to timbre resulted in consistent perceptual aftereffects, such that hearing sound A significantly altered perception of a neutral morph between A and B, making it sound more like B. Furthermore, these aftereffects were robust to moderate pitch changes, suggesting that adaptation to timbral features used for object identification drives these effects, analogous to face adaptation in vision.


Subjects
Auditory Perception/physiology, Hearing/physiology, Pitch Perception/physiology, Acoustic Stimulation/methods, Adolescent, Adult, Female, Humans, Male, Music, Pitch Discrimination, Psychoacoustics, Sound, Sound Spectrography/methods, Voice, Young Adult
14.
Front Syst Neurosci ; 11: 61, 2017.
Article in English | MEDLINE | ID: mdl-29018336

ABSTRACT

Cognitive neuroscience has seen rapid growth in the size and complexity of data recorded from the human brain as well as in the computational tools available to analyze this data. This data explosion has resulted in an increased use of multivariate, model-based methods for asking neuroscience questions, allowing scientists to investigate multiple hypotheses with a single dataset, to use complex, time-varying stimuli, and to study the human brain under more naturalistic conditions. These tools come in the form of "Encoding" models, in which stimulus features are used to model brain activity, and "Decoding" models, in which neural features are used to generate a stimulus output. Here we review the current state of encoding and decoding models in cognitive electrophysiology and provide a practical guide toward conducting experiments and analyses in this emerging field. Our examples focus on using linear models in the study of human language and audition. We show how to calculate auditory receptive fields from natural sounds as well as how to decode neural recordings to predict speech. The paper aims to be a useful tutorial to these approaches, and a practical introduction to using machine learning and applied statistics to build models of neural activity. The data analytic approaches we discuss may also be applied to other sensory modalities, motor systems, and cognitive systems, and we cover some examples in these areas. In addition, a collection of Jupyter notebooks is publicly available as a complement to the material covered in this paper, providing code examples and tutorials for predictive modeling in python. The aim is to provide a practical understanding of predictive modeling of human brain data and to propose best-practices in conducting these analyses.
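The "encoding model" idea from this tutorial-style paper can be sketched in a few lines of numpy: fit a regularized linear map from stimulus features to a neural response, then evaluate on held-out data. The simulated data, feature dimensions, and regularization value below are illustrative assumptions, not the paper's actual analysis or notebook code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated "experiment": stimulus features over time (e.g., spectrogram
# channels) and a neural response that is a noisy linear function of them.
n_time, n_feat = 500, 8
X = rng.standard_normal((n_time, n_feat))           # stimulus features
true_w = rng.standard_normal(n_feat)                # ground-truth receptive field
y = X @ true_w + 0.1 * rng.standard_normal(n_time)  # simulated neural response

# Encoding model: ridge regression mapping features -> response.
lam = 1.0  # regularization strength (an arbitrary illustrative choice)
w_hat = np.linalg.solve(X.T @ X + lam * np.eye(n_feat), X.T @ y)

# Validate predictions on held-out data, as the tutorial recommends.
X_val = rng.standard_normal((200, n_feat))
y_val = X_val @ true_w + 0.1 * rng.standard_normal(200)
r = np.corrcoef(X_val @ w_hat, y_val)[0, 1]
print(round(r, 3))  # prediction/response correlation on held-out data
```

A decoding model simply reverses the mapping (response features predicting a stimulus property); in the linear case, the same ridge machinery applies with `X` and `y` swapped.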

15.
Front Comput Neurosci ; 11: 68, 2017.
Article in English | MEDLINE | ID: mdl-28824408

ABSTRACT

The signal transformations that take place in high-level sensory regions of the brain remain enigmatic because of the many nonlinear transformations that separate responses of these neurons from the input stimuli. One would like to have dimensionality reduction methods that can describe responses of such neurons in terms of operations on a large but still manageable set of relevant input features. A number of methods have been developed for this purpose, but often these methods rely on the expansion of the input space to capture as many relevant stimulus components as statistically possible. This expansion leads to a lower effective sampling thereby reducing the accuracy of the estimated components. Alternatively, so-called low-rank methods explicitly search for a small number of components in the hope of achieving higher estimation accuracy. Even with these methods, however, noise in the neural responses can force the models to estimate more components than necessary, again reducing the methods' accuracy. Here we describe how a flexible regularization procedure, together with an explicit rank constraint, can strongly improve the estimation accuracy compared to previous methods suitable for characterizing neural responses to natural stimuli. Applying the proposed low-rank method to responses of auditory neurons in the songbird brain, we find multiple relevant components making up the receptive field for each neuron and characterize their computations in terms of logical OR and AND computations. The results highlight potential differences in how invariances are constructed in visual and auditory systems.
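The explicit rank constraint described in this abstract can be illustrated with a toy example: take a noisy, unconstrained receptive-field estimate and truncate its SVD to rank 1. The STRF shape, noise level, and scale below are illustrative assumptions, not the authors' estimator or regularization procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical rank-1 spectrotemporal receptive field (STRF): the outer
# product of a spectral profile and a temporal profile.
n_freq, n_lag = 12, 10
spectral = np.exp(-0.5 * ((np.arange(n_freq) - 5) / 2.0) ** 2)
temporal = np.sin(np.linspace(0.0, np.pi, n_lag))
strf_true = 3.0 * np.outer(spectral, temporal)

# A noisy, unconstrained estimate (as an unregularized fit might return).
strf_noisy = strf_true + 0.3 * rng.standard_normal((n_freq, n_lag))

# Explicit rank constraint: keep only the top singular component, which
# discards noise spread across the remaining components.
U, s, Vt = np.linalg.svd(strf_noisy, full_matrices=False)
strf_rank1 = s[0] * np.outer(U[:, 0], Vt[0])

err_noisy = np.linalg.norm(strf_noisy - strf_true)
err_rank1 = np.linalg.norm(strf_rank1 - strf_true)
print(err_rank1 < err_noisy)  # rank constraint reduces estimation error
```

When the true filter really is low-rank, the constrained estimate is closer to ground truth because noise energy in the discarded components never enters the model.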

16.
J Neurosci ; 37(27): 6539-6557, 2017 07 05.
Article in English | MEDLINE | ID: mdl-28588065

ABSTRACT

Speech comprehension requires that the brain extract semantic meaning from the spectral features represented at the cochlea. To investigate this process, we performed an fMRI experiment in which five men and two women passively listened to several hours of natural narrative speech. We then used voxelwise modeling to predict BOLD responses based on three different feature spaces that represent the spectral, articulatory, and semantic properties of speech. The amount of variance explained by each feature space was then assessed using a separate validation dataset. Because some responses might be explained equally well by more than one feature space, we used a variance partitioning analysis to determine the fraction of the variance that was uniquely explained by each feature space. Consistent with previous studies, we found that speech comprehension involves hierarchical representations starting in primary auditory areas and moving laterally on the temporal lobe: spectral features are found in the core of A1, mixtures of spectral and articulatory in STG, mixtures of articulatory and semantic in STS, and semantic in STS and beyond. Our data also show that both hemispheres are equally and actively involved in speech perception and interpretation. Further, responses as early in the auditory hierarchy as in STS are more correlated with semantic than spectral representations. These results illustrate the importance of using natural speech in neurolinguistic research. Our methodology also provides an efficient way to simultaneously test multiple specific hypotheses about the representations of speech without using block designs and segmented or synthetic speech.

SIGNIFICANCE STATEMENT: To investigate the processing steps performed by the human brain to transform natural speech sound into meaningful language, we used models based on a hierarchical set of speech features to predict BOLD responses of individual voxels recorded in an fMRI experiment while subjects listened to natural speech. Both cerebral hemispheres were actively involved in speech processing in large and equal amounts. Also, the transformation from spectral features to semantic elements occurs early in the cortical speech-processing stream. Our experimental and analytical approaches are important alternatives and complements to standard approaches that use segmented speech and block designs, which report more laterality in speech processing and attribute semantic processing to higher levels of cortex than reported here.
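The variance partitioning analysis mentioned in this abstract can be sketched on simulated data: fit each feature space alone and jointly, then split the explained variance into unique and shared parts. The two one-dimensional feature spaces below are hypothetical stand-ins, not the paper's spectral, articulatory, or semantic spaces.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000

# Two hypothetical feature spaces that share a common component; the
# response depends on the shared part and on A's unique part only.
shared = rng.standard_normal(n)
a_unique = rng.standard_normal(n)
b_unique = rng.standard_normal(n)
A = (shared + a_unique).reshape(-1, 1)
B = (shared + b_unique).reshape(-1, 1)
y = 2.0 * shared + a_unique + 0.5 * rng.standard_normal(n)

def r2(X, y):
    """Fraction of variance explained by an OLS fit (with intercept)."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return 1.0 - (y - X1 @ beta).var() / y.var()

r2_a, r2_b = r2(A, y), r2(B, y)
r2_ab = r2(np.column_stack([A, B]), y)

# Partition the jointly explained variance.
unique_a = r2_ab - r2_b            # variance only A explains
unique_b = r2_ab - r2_a            # variance only B explains
shared_part = r2_a + r2_b - r2_ab  # variance either space explains
print(round(unique_a, 2), round(unique_b, 2), round(shared_part, 2))
```

Because `y` is driven by the shared component and A's unique component, A's unique partition is large, B's is near zero, and a sizeable shared partition remains, which is exactly the ambiguity that motivates partitioning rather than comparing single-space fits.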


Subjects
Cerebral Cortex/physiology, Neurological Models, Nerve Net/physiology, Speech Perception/physiology, Adult, Computer Simulation, Female, Humans, Male, Neural Pathways/physiology
17.
J Neurosci ; 37(13): 3491-3510, 2017 03 29.
Article in English | MEDLINE | ID: mdl-28235893

ABSTRACT

One of the most complex tasks performed by sensory systems is "scene analysis": the interpretation of complex signals as behaviorally relevant objects. The study of this problem, universal to species and sensory modalities, is particularly challenging in audition, where sounds from various sources and localizations, degraded by propagation through the environment, sum to form a single acoustical signal. Here we investigated in a songbird model, the zebra finch, the neural substrate for ranging and identifying a single source. We relied on ecologically and behaviorally relevant stimuli, contact calls, to investigate the neural discrimination of individual vocal signature as well as sound source distance when calls have been degraded through propagation in a natural environment. Performing electrophysiological recordings in anesthetized birds, we found neurons in the auditory forebrain that discriminate individual vocal signatures despite long-range degradation, as well as neurons discriminating propagation distance, with varying degrees of multiplexing between both information types. Moreover, the neural discrimination performance of individual identity was not affected by propagation-induced degradation beyond what was induced by the decreased intensity. For the first time, neurons with distance-invariant identity discrimination properties as well as distance-discriminant neurons are revealed in the avian auditory cortex. Because these neurons were recorded in animals that had prior experience neither with the vocalizers of the stimuli nor with long-range propagation of calls, we suggest that this neural population is part of a general-purpose system for vocalizer discrimination and ranging.

SIGNIFICANCE STATEMENT: Understanding how the brain makes sense of the multitude of stimuli that it continually receives in natural conditions is a challenge for scientists. Here we provide a new understanding of how the auditory system extracts behaviorally relevant information, the vocalizer identity and its distance to the listener, from acoustic signals that have been degraded by long-range propagation in natural conditions. We show, for the first time, that single neurons in the auditory cortex of zebra finches are capable of discriminating the individual identity and sound source distance in conspecific communication calls. The discrimination of identity in propagated calls relies on a neural coding that is robust to changes in intensity, signal quality, and signal-to-noise ratio.


Subjects
Action Potentials/physiology, Animal Communication, Auditory Cortex/physiology, Finches/physiology, Sensory Receptor Cells/physiology, Social Identification, Acoustic Stimulation/methods, Animals, Female, Male, Socialization
18.
Nat Commun ; 7: 13654, 2016 12 20.
Article in English | MEDLINE | ID: mdl-27996965

ABSTRACT

Experience shapes our perception of the world on a moment-to-moment basis. This robust perceptual effect of experience parallels a change in the neural representation of stimulus features, though the nature of this representation and its plasticity are not well-understood. Spectrotemporal receptive field (STRF) mapping describes the neural response to acoustic features, and has been used to study contextual effects on auditory receptive fields in animal models. We performed a STRF plasticity analysis on electrophysiological data from recordings obtained directly from the human auditory cortex. Here, we report rapid, automatic plasticity of the spectrotemporal response of recorded neural ensembles, driven by previous experience with acoustic and linguistic information, and with a neurophysiological effect in the sub-second range. This plasticity reflects increased sensitivity to spectrotemporal features, enhancing the extraction of more speech-like features from a degraded stimulus and providing the physiological basis for the observed 'perceptual enhancement' in understanding speech.


Subjects
Auditory Cortex/physiology, Speech Intelligibility/physiology, Acoustic Stimulation, Animals, Auditory Cortex/anatomy & histology, Auditory Perception/physiology, Brain Mapping, Electrocorticography, Auditory Evoked Potentials, Humans, Neuronal Plasticity/physiology, Phonetics
19.
Nature ; 532(7600): 453-8, 2016 Apr 28.
Article in English | MEDLINE | ID: mdl-27121839

ABSTRACT

The meaning of language is represented in regions of the cerebral cortex collectively known as the 'semantic system'. However, little of the semantic system has been mapped comprehensively, and the semantic selectivity of most regions is unknown. Here we systematically map semantic selectivity across the cortex using voxel-wise modelling of functional MRI (fMRI) data collected while subjects listened to hours of narrative stories. We show that the semantic system is organized into intricate patterns that seem to be consistent across individuals. We then use a novel generative model to create a detailed semantic atlas. Our results suggest that most areas within the semantic system represent information about specific semantic domains, or groups of related concepts, and our atlas shows which domains are represented in each area. This study demonstrates that data-driven methods, commonplace in studies of human neuroanatomy and functional connectivity, provide a powerful and efficient means for mapping functional representations in the brain.


Subjects
Brain Mapping, Cerebral Cortex/anatomy & histology, Cerebral Cortex/physiology, Semantics, Speech, Adult, Auditory Perception, Female, Humans, Magnetic Resonance Imaging, Male, Narration, Principal Component Analysis, Reproducibility of Results
20.
Anim Cogn ; 19(2): 285-315, 2016 Mar.
Article in English | MEDLINE | ID: mdl-26581377

ABSTRACT

Although a universal code for the acoustic features of animal vocal communication calls may not exist, the thorough analysis of the distinctive acoustical features of vocalization categories is important not only to decipher the acoustical code for a specific species but also to understand the evolution of communication signals and the mechanisms used to produce and understand them. Here, we recorded more than 8000 examples of almost all the vocalizations of the domesticated zebra finch, Taeniopygia guttata: vocalizations produced to establish contact, to form and maintain pair bonds, to sound an alarm, to communicate distress or to advertise hunger or aggressive intent. We characterized each vocalization type using complete representations that avoided any a priori assumptions on the acoustic code, as well as classical bioacoustics measures that could provide more intuitive interpretations. We then used these acoustical features to rigorously determine the potential information-bearing acoustical features for each vocalization type using both a novel regularized classifier and an unsupervised clustering algorithm. Vocalization categories are discriminated by the shape of their frequency spectrum and by their pitch saliency (noisy to tonal vocalizations) but not particularly by their fundamental frequency. Notably, the spectral shape of zebra finch vocalizations contains peaks or formants that vary systematically across categories and that would be generated by active control of both the vocal organ (source) and the upper vocal tract (filter).


Subjects
Algorithms, Finches/physiology, Animal Vocalization, Animals, Female, Male, Social Behavior, Sound Spectrography